# Memory-optimized inference

## Qwen3 30B A3B GGUF

License: Apache-2.0 · Category: Large Language Model · Publisher: Mungert

Qwen3-30B-A3B is a large language model based on Qwen3-30B-A3B-Base. It supports text generation tasks and is optimized for memory efficiency using ultra-low-bit quantization.
## Qwen3 14B GGUF

License: Apache-2.0 · Category: Large Language Model · Publisher: Mungert

Qwen3-14B is a GGUF-format model generated from Qwen/Qwen3-14B-Base. It supports text generation tasks and is optimized for memory efficiency using IQ-DynamicGate ultra-low-bit quantization.
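As a rough illustration of why ultra-low-bit quantization matters for memory-optimized inference: weight storage scales linearly with bits per weight. The sketch below is illustrative only, not these models' exact layouts (real GGUF quant types such as Q4_K or IQ2_XXS store extra per-block scale metadata, and inference also needs memory for the KV cache and activations):

```python
def estimate_weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory needed to hold the model weights alone.

    Illustrative back-of-envelope estimate: ignores KV cache,
    activations, and the per-block scale metadata that GGUF
    quantization formats actually add on top of the raw weights.
    """
    # bits -> bytes (/8) -> gigabytes (/1e9)
    return n_params * bits_per_weight / 8 / 1e9

# A 30B-parameter model at FP16 vs. a ~2-bit ultra-low-bit quant
print(estimate_weight_memory_gb(30e9, 16))  # 60.0 GB
print(estimate_weight_memory_gb(30e9, 2))   # 7.5 GB

# A 14B model at a typical 4-bit quantization level
print(estimate_weight_memory_gb(14e9, 4))   # 7.0 GB
```

The roughly 8x reduction from FP16 to 2-bit is what lets models of this size fit on consumer GPUs or in system RAM, at some cost in output quality.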
© 2025 AIbase